5.5 Sequential Games
Sequential games are those
in which players make moves at different times or in turn. This means that
players who move later in the game have additional information about the
actions of other players or states of the world. This also means that players
who move first can often influence the game. Each player's strategy makes the
actions that he or she chooses conditional on the additional information
received during the game.
Understanding sequential
games is very important in business. It is common for business planners to
apply rule-of-thumb approaches and static analysis to strategic problems. However, this
approach ignores the fact that strategic situations are often vastly different
from one another and highly dynamic. Modelling business situations as sequential
games forces a planner to consider these aspects and allows for better
forecasting and planning, both of which lead directly to better
decision-making.
Extensive Form Representation of Games
A common way of representing games, especially sequential games, is the extensive form representation, which
uses game trees. Game trees are made up of nodes and branches, which are used to represent the sequence of moves
and the available actions, respectively. Consider two players, Mr
Black and Ms White, who are playing a sequential game. Mr Black moves first and
has the option of Up or Down. Ms White then observes his action. Regardless of
what Mr Black chooses, she then has the option of High or Low. The game tree
for this game would appear as follows:
In this subject, decisions are represented by square nodes. Node a is the decision node where Mr Black chooses
between Up and Down. Since node a is
the first node, it is also known as the initial node. Nodes b and c are the decision nodes at which Ms White chooses between High and
Low. The triangle-shaped ending nodes on the right are the terminal nodes; the payoffs each player receives at each outcome are listed beside them.
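Although the figure itself is not reproduced here, the tree can be written down directly. The following Python sketch (an illustration created for this example, not part of the subject materials) represents the Black and White game as nested dictionaries; the payoffs are taken from the payoff matrix given later in this topic and are listed as (Mr Black, Ms White).

```python
# A minimal sketch of the Black/White game tree as nested dictionaries.
# Payoffs are (Mr Black, Ms White), taken from the payoff matrix later in this topic.
game_tree = {
    "node": "a", "player": "Mr Black",         # decision node a (the initial node)
    "actions": {
        "Up": {
            "node": "b", "player": "Ms White",  # decision node b
            "actions": {
                "High": {"payoffs": (0, 0)},    # terminal node
                "Low":  {"payoffs": (1, 2)},    # terminal node
            },
        },
        "Down": {
            "node": "c", "player": "Ms White",  # decision node c
            "actions": {
                "High": {"payoffs": (2, 1)},    # terminal node
                "Low":  {"payoffs": (0, 0)},    # terminal node
            },
        },
    },
}
```

Each decision node records the player who moves there and the branches available; each terminal node records only the payoffs.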
Sometimes, one player's action at a given stage can change the options
available at subsequent stages. Suppose that you adjust the above game so that
if Mr Black plays Down, Ms White can play High, Medium or Low. In this
situation, the game tree would look as follows:
In sequential games, it is
important to clearly define what is meant by strategy. Game theorists define a strategy as a
complete contingent plan of actions. In other words, a strategy specifies what
action a player will take at each decision node. Consider once again the game
between Mr Black and Ms White.
Mr Black has two strategies available — Up and Down. Ms White, however,
actually has four strategies available since there are two nodes to consider — b and c — and two possible actions at each node — High and Low. The
following table shows the strategies available to Ms White:
| | If Mr Black Plays Up, Play: | If Mr Black Plays Down, Play: |
| Ms White's Strategy 1 | High | High |
| Ms White's Strategy 2 | High | Low |
| Ms White's Strategy 3 | Low | High |
| Ms White's Strategy 4 | Low | Low |
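As a quick check of this counting logic, the short sketch below (illustrative only) enumerates Ms White's strategies as the Cartesian product of her possible actions at nodes b and c; it reproduces the four rows of the table above.

```python
from itertools import product

# A strategy for Ms White is one action per decision node she might reach,
# so her strategy set is the product of the actions available at b and c.
actions_at_node = {"b": ["High", "Low"], "c": ["High", "Low"]}

strategies = [
    dict(zip(actions_at_node, combo))
    for combo in product(*actions_at_node.values())
]

for i, s in enumerate(strategies, start=1):
    print(f"Ms White's Strategy {i}: at b play {s['b']}, at c play {s['c']}")
# Prints the same four strategies listed in the table above.
```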
Because actions always lead to reactions, an important aspect of
strategy in sequential games is that players must consider — and plan for —
their opponent's reactions. In the example above, if Mr Black wants to maximise
his payoff, he must consider how Ms White will react if he moves Up and how she
will react if he moves Down.
The following animation illustrates how game trees can be used to map
out sequential games. As you watch, take special note of how the decisions of
one player affect the strategy choices available to the other.
Decision and game trees are used to map out the types of scenarios
businesses encounter every day. Decision trees are used to map out scenarios
involving only one player; game trees are meant to handle scenarios with
multiple players. In either case, it is important to remember that trees are
simply tools — they do not take the decision out of the decision-maker's hands.
Instead, they are intended to channel that person's experience and intuition
toward the goal of finding the best strategy given the known alternatives. In
the end, the decision-maker is left with a reasoned estimate of the likely outcomes
rather than a simple guess.
Every tree is based on certain assumptions. The goal is to limit these
to the most relevant assumptions for the given scenario. In this way, the game
tree tool remains useful and manageable. Otherwise, if every imaginable
situation were included, evaluating the remote chance of, say, nuclear war
would get in the way of assessing the more relevant chance that a competitor
has the resources to enter the market first.
Game tree
models allow players to make better decisions by forcing them to consider the
actions and reactions of all other players involved. In a sequential game, the
decision-maker eliminates a great deal of uncertainty simply by creating a
clear-cut list of the various players, their actions and reactions, and the
decision-maker's best response to each. The game tree provides a formal means
to keep track of these items. Without this method, many players or alternatives
might be overlooked and therefore never planned for. Lack of planning leads to
surprises, and surprises from the competition are rarely enjoyable.
For example,
suppose player 1 is considering whether to move Up or Down in its game with
player 2 and creates the following game tree without considering player 2's
payoffs.
With the current tree, player 1 has no way of knowing what player 2 will
do at node b. As far as player 1 can tell, player 2 is equally likely to
choose Up (giving player 1 a payoff of $500) or Down (giving player 1 only
$100). Based on the available information, player 1 can do no better than
assume a 50/50 chance of each: the expected payoff from Up is
0.5 × $500 + 0.5 × $100 = $300, whereas Down guarantees $450. So while
player 1 might want the $500, he or she will optimally choose Down and
take the $450 rather than bear the risk.
Without taking an opponent's payoffs or motivations into account, a
player cannot be confident in making a decision. The tree below features player
2's payoffs. Is Down still an optimal choice for player 1 now that player 2's
payoffs have been considered?
The answer is no. When player 1 takes player 2's reactions into account,
player 1 sees that Up is the optimal move at node a because player 2 would also optimally choose Up at node b (where player 2 will earn $300 rather
than only $200 from Down). By considering all the elements of this game, player
1 ends up with the $500 originally hoped for.
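The two calculations can be laid out explicitly. The sketch below (a minimal illustration using the dollar figures from this example and the 50/50 assumption described above) contrasts player 1's choice when player 2's payoffs are unknown with the choice once player 2's best reply can be anticipated.

```python
# Payoffs at node b if player 1 plays Up, listed as (player 1, player 2).
node_b = {"Up": (500, 300), "Down": (100, 200)}
safe_payoff = 450  # player 1's certain payoff from playing Down at node a

# Without player 2's payoffs, player 1 can only guess: assume a 50/50 split.
expected_up = 0.5 * node_b["Up"][0] + 0.5 * node_b["Down"][0]       # 300.0
choice_blind = "Up" if expected_up > safe_payoff else "Down"         # Down

# With player 2's payoffs, player 1 can predict player 2's optimal reply.
best_reply = max(node_b, key=lambda a: node_b[a][1])                 # Up (300 > 200)
choice_informed = "Up" if node_b[best_reply][0] > safe_payoff else "Down"  # Up

print(choice_blind, choice_informed)   # Down Up
```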
In addition to
decision nodes, a game tree can include chance nodes (represented in this
subject by circles). Chance nodes are used to symbolise events that are
uncertain or beyond a primary player's direct control.
Suppose you
were standing at the foot of the Himalayas and had the option of either trying
to climb Mt. Everest or walking away. In a decision or game tree, that choice
(Climb or Don't Climb) would be represented by a decision node because it is
entirely in the hands of you, the player — only you can decide whether you will
try to climb the mountain.
On the other
hand, how far up the mountain you will get is largely uncertain or "up to
chance". You can aspire to reach the summit, but you cannot merely decide
to get there; weather, injury, lack of sufficient supplies or other setbacks could all end
your climb early. Therefore, the question of whether your climb will succeed or
fail is represented by a chance node (indicated by the letter C), as in the
following tree.
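When a tree is evaluated, a chance node is valued by weighting each branch by its probability rather than by choosing the best branch. The sketch below is illustrative only; the probability of reaching the summit and the utility numbers are assumptions made for demonstration and are not part of the Everest example in the text.

```python
# Hypothetical numbers for the Everest example: the value of the chance node
# is the probability-weighted average of its branches, which the climber then
# compares with the value of walking away.
p_summit = 0.3                              # assumed chance the climb succeeds
value_summit, value_turn_back = 100, -40    # assumed utilities of each outcome
value_dont_climb = 0

value_climb = p_summit * value_summit + (1 - p_summit) * value_turn_back  # 2.0
decision = "Climb" if value_climb > value_dont_climb else "Don't Climb"
print(value_climb, decision)   # 2.0 Climb
```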
Every game is built around
the actions and payoffs of the primary players. Chance players are people or organisations
that have the potential to influence the game, even though these players are
not directly competing for the game's payoffs. When constructing a game tree,
chance nodes are used to represent events (pure chance) or players that are out
of the direct control of the primary players but that, nonetheless, must still
be factored into the decisions made by the primary players.
In the case of the
competing researchers, a government agency interested in possibly giving a
research grant to one of the competing companies would be considered a chance
player. This is because the primary players cannot decide whether (or to whom)
the agency will actually give the grant. It is out of the hands of the primary
players.
The important thing to
remember is that primary players — those competing directly for that game's
payoffs — have no way of forcing chance players to act in a desired or
beneficial manner. By definition, chance players are not affected or directly
influenced by the actions of the primary players.
Returning to the example of
climbing Mt. Everest, a chance player, in this case, might be an experienced Sherpa guide who has the choice of leading your expedition
or that of another climber. You and the other climber would want the Sherpa's help, but neither of you can control the decision.
It is entirely up to the Sherpa to decide which
expedition to lead. The following tree illustrates the way chance players and
pure chance (C) work in the context of decision or game trees:
As with
assumptions, it is important to limit chance events to those most relevant to
the scenario the decision-maker is analysing.
Click on the links below to see two examples that further illustrate how game trees can be used.
After a scenario is mapped out in a game tree model and the possibilities
arising from the different options are clearly laid out, the decision-maker
still cannot simply read off the best strategy. The next step after creating
the tree is to "solve" it: to strip away suboptimal branches until only the
optimal strategy remains. This is done through backward induction.
In this process, the game tree is essentially worked through in reverse.
Starting from the terminal payoffs, the decision-maker eliminates suboptimal
actions until only the equilibrium path remains. By doing this, an opponent's
likely moves from the initial node to the payoffs can be mapped, allowing the
decision-maker to strategise for each of those potential moves and ultimately
find the equilibrium.
Backward induction assumes
that players will move optimally at each node — that opponents can be expected
to act in their own best interests. Knowing this, a decision-maker working to
solve a tree can confidently eliminate actions that are suboptimal to his or
her opponents. For example, consider the earlier game between Mr Black and Ms
White.
At node b, playing High gives
Ms White a payoff of 0, while playing Low gives her a payoff of 2. Therefore,
Ms White would rationally choose to play Low. We can ignore the possibility of
Ms White's playing High at node b.
Similarly, we can ignore the possibility that Ms White will play Low at node c since her payoff for High is 1 and for
Low is 0. In short, of the four strategies available to Ms White, backward induction
implies that her only rational strategy is to play Low at node b and High at node c. This implies that Mr Black's choices look as follows:
Notice that Mr Black's optimal strategy is now obvious — play Down. Down
yields a payoff of 2 while Up gives a payoff of only 1. Consequently, the
equilibrium of this game is Mr Black playing Down and Ms White playing Low if
Mr Black plays Up and playing High if Mr Black plays Down. This is called the subgame perfect equilibrium of the game.
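The same logic can be automated. The following sketch (a simple illustration, not a tool provided with this subject) applies backward induction to the dictionary representation used earlier: at each decision node it keeps the action that maximises the payoff of the player who moves there, and it records the chosen action at every node so that the output is a complete contingent plan.

```python
# A minimal backward-induction sketch for the Black/White game. The tree uses
# the dictionary format from the earlier sketch; payoffs are (Mr Black, Ms White).
T = lambda black, white: {"payoffs": (black, white)}      # terminal-node helper
game_tree = {
    "node": "a", "player": "Mr Black", "actions": {
        "Up":   {"node": "b", "player": "Ms White",
                 "actions": {"High": T(0, 0), "Low": T(1, 2)}},
        "Down": {"node": "c", "player": "Ms White",
                 "actions": {"High": T(2, 1), "Low": T(0, 0)}},
    },
}
PLAYER_INDEX = {"Mr Black": 0, "Ms White": 1}

def backward_induction(node, plan=None):
    """Record the optimal action at every decision node and return the
    payoffs reached when every player moves optimally."""
    if plan is None:
        plan = {}
    if "payoffs" in node:                 # terminal node: nothing left to decide
        return node["payoffs"], plan
    mover = PLAYER_INDEX[node["player"]]
    best_action, best_payoffs = None, None
    for action, child in node["actions"].items():
        payoffs, _ = backward_induction(child, plan)
        if best_payoffs is None or payoffs[mover] > best_payoffs[mover]:
            best_action, best_payoffs = action, payoffs
    plan[node["node"]] = best_action
    return best_payoffs, plan

payoffs, plan = backward_induction(game_tree)
print(plan)      # {'b': 'Low', 'c': 'High', 'a': 'Down'}
print(payoffs)   # (2, 1): Mr Black earns 2 and Ms White earns 1
```

The plan returned is exactly the subgame perfect equilibrium described above: Ms White plays Low at node b and High at node c, so Mr Black plays Down.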
It is important to note
that all subgame perfect equilibria
are Nash equilibria.
Since backward induction ensures that each player will play his or her best
action at each node, the resulting strategies will correspond to a Nash
equilibrium. To see this, again consider the game between Mr Black and Ms
White.
Notice that there is only one subgame perfect
equilibrium. Ms White plays Low if Mr Black plays Up, and plays High if Mr
Black plays Down. Therefore, Mr Black will play Down.
However, suppose that Ms White has adopted a strategy that states that
she should always play Low and Mr Black has chosen to play Up. Can a Nash
equilibrium be reached in this case?
A Nash equilibrium implies that no player can do better by switching
strategies given the strategies of the other players. If Ms White switches her
action at node b to High while still choosing Low at node c, she would be
worse off given that Mr Black is playing Up. If instead she switches her
action at node c to High while continuing to choose Low at node b, she would
be no better off, because node c is never reached when Mr Black plays Up.
Similarly, with Ms White committed to always playing Low, if Mr Black switched
his strategy to Down, he would be worse off. Thus, no player can unilaterally
become better off by switching his or her strategy.
We have shown that this result is a Nash equilibrium, but it is not a subgame
perfect equilibrium. This is because it violates the rules of backward
induction, which hold that Ms White would never
choose Low at node c. In summary, all subgame perfect equilibria are Nash
equilibria, but not all Nash equilibria are subgame perfect.
You can also determine
whether Ms White always playing Low and Mr Black playing Up is a Nash equilibrium
by using the following payoff matrix:
| Payoff Matrix | Mr Black: Up | Mr Black: Down |
| Ms White: High/High | 0, 0 | 2, 1 |
| Ms White: High/Low | 0, 0 | 0, 0 |
| Ms White: Low/High | 1, 2 | 2, 1 |
| Ms White: Low/Low | 1, 2 | 0, 0 |

Note: Ms White's choices specify her actions at node b and node c, respectively. Payoffs are listed as (Mr Black, Ms White).
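The check described above can also be carried out mechanically. The sketch below (illustrative only) encodes the payoff matrix and tests whether a given pair of strategies is a Nash equilibrium by looking for a profitable unilateral deviation.

```python
# Payoffs are (Mr Black, Ms White), exactly as in the table above.
payoffs = {
    ("Up",   "High/High"): (0, 0), ("Down", "High/High"): (2, 1),
    ("Up",   "High/Low"):  (0, 0), ("Down", "High/Low"):  (0, 0),
    ("Up",   "Low/High"):  (1, 2), ("Down", "Low/High"):  (2, 1),
    ("Up",   "Low/Low"):   (1, 2), ("Down", "Low/Low"):   (0, 0),
}
black_moves = ["Up", "Down"]
white_strategies = ["High/High", "High/Low", "Low/High", "Low/Low"]

def is_nash(black, white):
    """No player can do strictly better by a unilateral switch."""
    b_payoff, w_payoff = payoffs[(black, white)]
    black_ok = all(payoffs[(b, white)][0] <= b_payoff for b in black_moves)
    white_ok = all(payoffs[(black, w)][1] <= w_payoff for w in white_strategies)
    return black_ok and white_ok

print(is_nash("Down", "Low/High"))   # True: the subgame perfect equilibrium
print(is_nash("Up", "Low/Low"))      # True: a Nash equilibrium, but not subgame perfect
print(is_nash("Up", "Low/High"))     # False: Mr Black would switch to Down
```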
Some games include both sequential and simultaneous elements. For
example, a game might initially consist of an entry decision by one firm. If
the firm enters, there will then be a simultaneous competition game. The game
is also solved by backward induction: first solve the last, simultaneous stage
for its Nash equilibrium, then work back up the tree using the payoffs from
that equilibrium, as in the sketch below.
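The sketch below is a hypothetical illustration: the entrant's and incumbent's payoffs for the entry stage and the pricing stage are invented for demonstration and are not taken from this subject's materials. It solves the simultaneous pricing stage for its Nash equilibrium first and then rolls that result back to the entry decision.

```python
from itertools import product

# Hypothetical payoffs, listed as (entrant, incumbent).
pricing_game = {   # payoffs in the simultaneous pricing stage if the entrant enters
    ("Low", "Low"):   (-1, 1), ("Low", "High"):  (5, 2),
    ("High", "Low"):  (-2, 8), ("High", "High"): (4, 6),
}
stay_out_payoffs = (0, 10)       # the incumbent keeps the market to itself
prices = ["Low", "High"]

def is_nash(p_entrant, p_incumbent):
    e, i = pricing_game[(p_entrant, p_incumbent)]
    return (all(pricing_game[(p, p_incumbent)][0] <= e for p in prices) and
            all(pricing_game[(p_entrant, p)][1] <= i for p in prices))

# Step 1: solve the last (simultaneous) stage for its Nash equilibrium.
stage2 = next(c for c in product(prices, prices) if is_nash(*c))
stage2_payoffs = pricing_game[stage2]            # ('Low', 'High') -> (5, 2)

# Step 2: roll back -- the entrant compares the stage-2 payoff with staying out.
decision = "Enter" if stage2_payoffs[0] > stay_out_payoffs[0] else "Stay out"
print(stage2, stage2_payoffs, decision)          # ('Low', 'High') (5, 2) Enter
```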
In the game between Mr
Black and Ms White, Mr Black was able to use a first-mover advantage to achieve the
outcome that he preferred. Many times, by moving first, a player can determine
the direction of the game, forcing other players to react to that choice
rather than acting independently. However, not all sequential games have a
first-mover advantage. In fact, some have a second-mover advantage. For
example, consider the following game:
If Amy chooses Up, Bernard will optimally choose Zag.
If Amy chooses Down, Bernard will optimally choose Zig.
In both cases, Bernard will end up with a payoff of 1 while Amy will have a
payoff of –1. However, if we switch the order so that Bernard moves first and
Amy moves second, then Amy will end up with a payoff of 1 while Bernard will
have a payoff of –1. Therefore, this type of game has a second-mover advantage.
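A rough sketch of this game follows. The exact payoff numbers are assumed (a zero-sum structure consistent with the outcomes described above, in which the winner earns 1 and the loser earns –1); the point is that whichever player moves second can always best-respond and win.

```python
# Assumed zero-sum payoffs consistent with the Amy/Bernard game described above.
# Payoffs are listed as (Amy, Bernard).
payoffs = {
    ("Up", "Zig"):   (1, -1), ("Up", "Zag"):   (-1, 1),
    ("Down", "Zig"): (-1, 1), ("Down", "Zag"): (1, -1),
}
amy_moves, bernard_moves = ["Up", "Down"], ["Zig", "Zag"]

def solve(first_is_amy):
    """Backward induction when the second mover observes the first move."""
    if first_is_amy:
        # Bernard best-responds to each of Amy's moves; Amy anticipates this.
        reply = {a: max(bernard_moves, key=lambda b: payoffs[(a, b)][1])
                 for a in amy_moves}
        a = max(amy_moves, key=lambda m: payoffs[(m, reply[m])][0])
        return payoffs[(a, reply[a])]
    else:
        # Amy best-responds to each of Bernard's moves; Bernard anticipates this.
        reply = {b: max(amy_moves, key=lambda a: payoffs[(a, b)][0])
                 for b in bernard_moves}
        b = max(bernard_moves, key=lambda m: payoffs[(reply[m], m)][1])
        return payoffs[(reply[b], b)]

print(solve(first_is_amy=True))    # (-1, 1): the second mover, Bernard, wins
print(solve(first_is_amy=False))   # (1, -1): the second mover, Amy, wins
```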
Do first movers always have an advantage? Click on the following link to see why moving first might be a disadvantage.
Click on the following link for an advanced explanation of how to solve trees.
An important facet of sequential games is that players would often like
to arrive at a particular Nash equilibrium but cannot because it is not a subgame perfect equilibrium. Interestingly, this is often
the player's own fault. Consider the game between Mr Black and Ms White again.
Ms White prefers Outcome X to Outcome Y, but she cannot get there unless
Mr Black plays Up. She needs some method to induce Mr Black to play Up. Notice
that Mr Black would prefer to play Up if Ms White always played Low. This would
result in Outcome X.
You might wonder why Ms White cannot simply threaten to always play Low
and thereby achieve Outcome X. She cannot, because the threat is not credible.
Mr Black knows that once he plays Down, Ms White will choose High regardless
of any threats she has made. Playing High is simply Ms White's best move at node c.
This demonstrates an important concept in game theory — the value of
commitment. If Ms White could
credibly commit to always play Low, then Mr Black would choose to play Up and
Outcome X would result.
Click on the following link to see how one company's commitment helped keep rivals out of the market.
Commitment
limits the options available to a player. People often make a mistake by
assuming that more options are better. However, game theory presents several
situations in which more options make players worse off. Therefore, players
often find it in their best interests to limit their available actions in
advance, such as when Cortes burned his ships to eliminate retreat as an
option.
Unfortunately,
though, eliminating options in a manner as physical and permanent as the
burning of a ship is rarely possible in business. However, if a player is able
to commit to a course of action in such a way that all other players recognise
that the committed player will never do the opposite, that player has
effectively eliminated that undesirable option.
The term
"value of commitment" originally referred to the numeric difference
between the payoff a player receives by committing to a strategy versus the
payoff that player receives if he or she fails to commit. Although this subject
is concerned with the conceptual value of commitment, it is interesting to note
that this value can often be quantifiably measured. For example, consider the
earlier game between Mr Black and Ms White.
Ms White receives a payoff of 1 if she cannot commit to playing Low.
However, if she can commit, she receives a payoff of 2. Therefore, her value of
commitment can be expressed numerically as 2 − 1 = 1. She would be
willing to pay any amount up to 1 to commit to playing Low because she gains 1
by such commitment.
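The calculation can be written out as a tiny sketch, using the numbers from the Black and White game above.

```python
# Ms White's value of commitment in the Black/White game.
spe_payoff_for_white = 1        # no commitment: Black plays Down, White plays High at c
committed_payoff_for_white = 2  # credible commitment to Low: Black plays Up
value_of_commitment = committed_payoff_for_white - spe_payoff_for_white
print(value_of_commitment)      # 1: the most Ms White would pay to commit
```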
Now try the following exercise, which allows you to apply what you have learnt about solving game trees. Click on the following link to launch the exercise.
Topic Summary
In this topic you have learnt how to:
· represent sequential games as game trees
· solve sequential games by working backwards
· use sequential games to better understand the value of commitment and first-mover advantages in strategic situations.
Now go to
topic 5.6, “Oligopoly”.